To determine the significance of a test result, one must integrate statistical test characteristics (sensitivity, specificity, predictive values) with the clinical context (pretest probability, prevalence, and patient presentation).
Before ordering or interpreting any test, ask:
How likely is the disease before testing?
What are the patient’s symptoms, signs, and risk factors?
This estimate — based on your clinical judgment or known disease prevalence in the population — is called the pretest probability.
Example: In an elderly smoker with hematuria, the pretest probability of bladder cancer is much higher than in a 20-year-old.
Sensitivity = Probability that the test is positive if the disease is present.
(High sensitivity → good for ruling out disease when negative).
Specificity = Probability that the test is negative if the disease is absent.
(High specificity → good for ruling in disease when positive).
These are intrinsic properties of the test — they do not depend on disease prevalence.
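As a minimal sketch (Python; the counts are invented for illustration), the two definitions translate directly into arithmetic on a 2×2 table:

```python
# Sketch: sensitivity and specificity from the cells of a 2x2 table.
# The counts used below are illustrative, not from a real study.

def sensitivity(tp, fn):
    """P(test positive | disease present) = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """P(test negative | disease absent) = TN / (TN + FP)."""
    return tn / (tn + fp)

# 90 true positives and 10 false negatives -> sensitivity 0.90
print(sensitivity(90, 10))     # 0.9
# 8,910 true negatives and 990 false positives -> specificity 0.90
print(specificity(8910, 990))  # 0.9
```

Because both ratios condition on true disease status, changing prevalence only rescales the rows of the 2×2 table and leaves the ratios untouched.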
After knowing the test result, update your estimate of disease probability using Likelihood Ratios (LRs):
Likelihood Ratios (LRs) are primarily used to assess the value of a diagnostic test and to change your pre-test probability (your initial suspicion of a disease) into a post-test probability (the probability of the disease after the test result).
The LR compares the likelihood of a test result in a person with the disease to the likelihood of the same result in a person without the disease. The further the LR is from 1, the more useful the test result is.
| LR Value | Interpretation | Clinical Effect |
|---|---|---|
| LR > 10 | Strong evidence for the disease | Large increase in post-test probability (Rule In) |
| LR 5-10 | Moderate evidence for the disease | Moderate increase in post-test probability |
| LR 2-5 | Small evidence for the disease | Small increase in post-test probability |
| LR ≈ 1 | No useful information | No change in post-test probability |
| LR 0.2-0.5 | Small evidence against the disease | Small decrease in post-test probability |
| LR 0.1-0.2 | Moderate evidence against the disease | Moderate decrease in post-test probability |
| LR < 0.1 | Strong evidence against the disease | Large decrease in post-test probability (Rule Out) |
There are two main types, one for a positive test and one for a negative test:
Positive Likelihood Ratio (LR+): The LR for a positive test result.
This tells you how much the odds of having the disease increase when the test is positive. A high LR+ is good for ruling in a disease.
Negative Likelihood Ratio (LR−): The LR for a negative test result.
This tells you how much the odds of having the disease decrease when the test is negative. A low LR− (closer to 0) is good for ruling out a disease.
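Both ratios follow directly from sensitivity and specificity; a minimal sketch (Python; the 90%/90% test is hypothetical):

```python
# Sketch: likelihood ratios computed from sensitivity and specificity.

def lr_positive(sens, spec):
    """LR+ = sensitivity / (1 - specificity)."""
    return sens / (1 - spec)

def lr_negative(sens, spec):
    """LR- = (1 - sensitivity) / specificity."""
    return (1 - sens) / spec

# A test with 90% sensitivity and 90% specificity:
print(lr_positive(0.90, 0.90))  # ~9: a positive result multiplies the odds of disease ~9-fold
print(lr_negative(0.90, 0.90))  # ~0.11: a negative result cuts the odds to about one-ninth
```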
The LRs are used in a variation of Bayes' theorem to convert your initial probability estimate (pre-test probability) into a final one (post-test probability):
Since clinicians typically think in terms of probability (percentage) rather than odds, the general steps are:
1. Estimate the pre-test probability: your best estimate of the patient having the disease before the test is done, based on risk factors, symptoms, and clinical experience.
2. Convert probability to odds: Odds = Probability / (1 − Probability).
3. Calculate the post-test odds: multiply the pre-test odds by the appropriate likelihood ratio (LR+ for a positive result, LR− for a negative result).
4. Convert the odds back to probability: Probability = Odds / (1 + Odds).
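The steps above can be condensed into one small helper (a sketch, assuming nothing beyond the formulas just stated):

```python
# Sketch: pre-test probability -> odds -> multiply by LR -> post-test probability.

def posttest_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)  # probability -> odds
    posttest_odds = pretest_odds * lr                 # apply the likelihood ratio
    return posttest_odds / (1 + posttest_odds)        # odds -> probability

# 10% pre-test probability, positive test with LR+ = 9:
print(round(posttest_probability(0.10, 9), 3))    # 0.5
# Same patient, negative test with LR- = 1/9:
print(round(posttest_probability(0.10, 1/9), 3))  # 0.012
```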
Alternatively, clinicians often use a graphical tool called a Fagan nomogram to convert pre-test probability to post-test probability quickly, without performing the odds conversions.
Would you like to walk through a specific clinical example using a Likelihood Ratio?
$$\text{Likelihood Ratio (Positive)} = \frac{\text{Sensitivity}}{1 - \text{Specificity}}$$

$$\text{Likelihood Ratio (Negative)} = \frac{1 - \text{Sensitivity}}{\text{Specificity}}$$
Then, use the Fagan nomogram or the following formula to estimate Post-Test Probability:
$$\text{Post-test odds} = \text{Pre-test odds} \times \text{Likelihood Ratio}$$

$$\text{Post-test probability} = \frac{\text{Post-test odds}}{1 + \text{Post-test odds}}$$
This gives you the probability that the patient truly has (or doesn’t have) the disease after testing.
Positive Predictive Value (PPV) = Probability the disease is present when the test is positive.
Negative Predictive Value (NPV) = Probability the disease is absent when the test is negative.
Unlike sensitivity/specificity, PPV and NPV depend on prevalence:
High disease prevalence → higher PPV, lower NPV.
Low disease prevalence → lower PPV, higher NPV.
Example:
A positive HIV test in a high-risk population (high prevalence) is far more likely to be a true positive than in a low-risk setting.
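The prevalence dependence can be made concrete with Bayes' theorem in probability form (a sketch; the 99%/99% figures below are hypothetical, merely HIV-test-like):

```python
# Sketch: PPV and NPV as functions of prevalence for a fixed test.

def ppv(sens, spec, prev):
    """P(disease | positive) = sens*prev / (sens*prev + (1-spec)*(1-prev))."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """P(no disease | negative) = spec*(1-prev) / (spec*(1-prev) + (1-sens)*prev)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Same hypothetical test (99% sensitive, 99% specific) in two settings:
print(round(ppv(0.99, 0.99, 0.10), 3))   # high-prevalence clinic: ~0.917
print(round(ppv(0.99, 0.99, 0.001), 3))  # low-prevalence screening: ~0.09
```

Identical test, identical result, yet the positive predictive value drops roughly tenfold when prevalence falls from 10% to 0.1%.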
Finally, weigh:
The post-test probability,
The potential consequences of false positives or negatives,
The patient’s overall risk and comorbidities.
Then decide:
Is the test result actionable (does it change management)?
Should you confirm with a more specific test?
Should you treat empirically or observe?
Example:
A positive D-dimer in a low-risk young woman (low pretest probability) is likely a false positive — proceed cautiously.
The same result in a post-op patient with dyspnea (high pretest probability) strongly supports PE → proceed to imaging.
| Concept | Definition | Role in Interpretation |
|---|---|---|
| Prevalence / Pretest Probability | Likelihood before testing | Sets the baseline |
| Sensitivity | P(T+ \| Disease) | Intrinsic to the test; high value rules out when negative |
| Specificity | P(T− \| No disease) | Intrinsic to the test; high value rules in when positive |
| PPV / NPV | P(Disease \| Test result) | Depend on prevalence |
| Likelihood Ratios | How much test changes odds | Quantifies diagnostic impact |
| Clinical Context | Symptoms, risk, consequences | Determines meaningfulness |
Would you like me to show a numerical example (e.g., how a test with 90% sensitivity/specificity behaves at different prevalence levels)?
It’s the best way to see how context changes the interpretation.
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Perfect — let’s go through a numerical example that shows how the same test can mean very different things depending on prevalence (pretest probability).
You’re interpreting a diagnostic test for Disease X.
The test has:
Sensitivity: 90%
Specificity: 90%
Imagine 10,000 people are tested.
| Status | # of people | Test + | Test − |
|---|---|---|---|
| Disease present (1%) | 100 | 90 (TP) | 10 (FN) |
| No disease (99%) | 9,900 | 990 (FP) | 8,910 (TN) |
Positive Predictive Value (PPV)
= True Positives / All Positives = 90 / (90 + 990) = 8.3%
Negative Predictive Value (NPV)
= True Negatives / All Negatives = 8,910 / (8,910 + 10) = 99.9%
✅ Interpretation:
A positive test means there’s only about an 8% chance the patient actually has the disease (most are false positives).
A negative test is very reassuring — 99.9% chance they don’t have it.
Conclusion: The test is useful to rule out, but not to confirm, disease when prevalence is low.
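The low-prevalence table above can be reproduced with plain integer arithmetic (a sketch of the same hypothetical cohort):

```python
# Sketch: 10,000 people, 1% prevalence, a 90% sensitive / 90% specific test.
n = 10_000
diseased = n // 100        # 1% prevalence -> 100 people with disease
healthy = n - diseased     # 9,900 without disease

tp = diseased * 9 // 10    # 90% sensitivity -> 90 true positives
fn = diseased - tp         # 10 false negatives
tn = healthy * 9 // 10     # 90% specificity -> 8,910 true negatives
fp = healthy - tn          # 990 false positives

ppv = tp / (tp + fp)       # 90 / 1,080
npv = tn / (tn + fn)       # 8,910 / 8,920
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 8.3%, NPV = 99.9%
```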
Now, in a higher-risk population — say, symptomatic patients — prevalence is 10%.
| Status | # of people | Test + | Test − |
|---|---|---|---|
| Disease present (10%) | 1,000 | 900 (TP) | 100 (FN) |
| No disease (90%) | 9,000 | 900 (FP) | 8,100 (TN) |
PPV = 900 / (900 + 900) = 50%
NPV = 8,100 / (8,100 + 100) = 98.8%
✅ Interpretation:
Now a positive result has a 50-50 chance of being true.
A negative test remains reliable for ruling out.
Now test a group with strong clinical suspicion (e.g., classic symptoms).
| Status | # of people | Test + | Test − |
|---|---|---|---|
| Disease present (50%) | 5,000 | 4,500 (TP) | 500 (FN) |
| No disease (50%) | 5,000 | 500 (FP) | 4,500 (TN) |
PPV = 4,500 / (4,500 + 500) = 90%
NPV = 4,500 / (4,500 + 500) = 90%
✅ Interpretation:
In this high-prevalence group, a positive result is very convincing (90%).
A negative test is less reassuring (could still miss 10%).
| Prevalence | PPV | NPV | Clinical Use |
|---|---|---|---|
| Low (1%) | 8% | 99.9% | Great to rule out, poor to confirm |
| Moderate (10%) | 50% | 98.8% | Useful both ways, but confirm positives |
| High (50%) | 90% | 90% | Great to confirm, may need repeat if negative |
Low prevalence (screening) → negative test rules out disease.
High prevalence (diagnostic setting) → positive test rules in disease.
Always combine with history, physical exam, and risk factors to set pretest probability before interpreting results.
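As a sanity check, all three scenarios can be recomputed in one sweep (a sketch using the same hypothetical 90%/90% test):

```python
# Sketch: PPV/NPV for the same test across the three prevalences in the summary table.
sens, spec = 0.90, 0.90

for prev in (0.01, 0.10, 0.50):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

The printed values reproduce the PPV/NPV columns of the summary table (8.3%/99.9%, 50.0%/98.8%, 90.0%/90.0%).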
Would you like me to show how to use Likelihood Ratios (LR+ and LR−) and Fagan’s nomogram to calculate post-test probability numerically for one of these cases?
That’s the formal way clinicians apply Bayes’ theorem in practice.
+++++++++++++++++++++++++++++++++++++
Great — let’s do the Likelihood Ratio (LR) method + Bayes’ math step-by-step, and apply it to the three prevalence examples we used earlier (1%, 10%, 50%). I’ll show the exact arithmetic so you can follow every digit.
Sensitivity = 0.90
Specificity = 0.90
Likelihood ratio positive (LR+) = Sens / (1 − Spec)
= 0.90 / (1 − 0.90)
= 0.90 / 0.10
= 9.0
Likelihood ratio negative (LR−) = (1 − Sens) / Spec
= (1 − 0.90) / 0.90
= 0.10 / 0.90
= 0.111111111... (repeating) = 1/9
Odds = p / (1 − p)
Probability = odds / (1 + odds)
We will use these to go from pretest probability → post-test odds (= pretest odds × LR) → post-test probability.
Pretest odds = 0.01 / (1 − 0.01) = 0.01 / 0.99
= 1 / 99 ≈ 0.0101010101...
If test is POSITIVE:
Post-odds = Pre-odds × LR+ = (1/99) × 9 = 9/99 = 1/11 ≈ 0.0909090909...
Post-probability = (1/11) / (1 + 1/11) = (1/11) / (12/11) = 1/12 = 0.0833333333 = 8.33%
If test is NEGATIVE:
LR− = 1/9. Post-odds = (1/99) × (1/9) = 1/891 ≈ 0.001122334...
Post-probability = (1/891) / (1 + 1/891) = (1/891) / (892/891) = 1/892 ≈ 0.00112108 = 0.1121%
Interpretation (1% prevalence):
Positive test → ~8.3% chance disease (most positives are false positives).
Negative test → ~0.11% chance disease (very reassuring).
Pretest odds = 0.10 / 0.90 = 1/9 ≈ 0.111111111...
If POSITIVE:
Post-odds = (1/9) × 9 = 1
Post-probability = 1 / (1 + 1) = 1/2 = 0.50 = 50%
If NEGATIVE:
Post-odds = (1/9) × (1/9) = 1/81 ≈ 0.012345679
Post-probability = (1/81) / (1 + 1/81) = (1/81) / (82/81) = 1/82 ≈ 0.012195122 = 1.22%
Interpretation (10% prevalence):
Positive test → 50% chance disease (use confirmatory testing or consider management).
Negative test → ~1.2% chance disease (rules out fairly well).
Pretest odds = 0.50 / 0.50 = 1
If POSITIVE:
Post-odds = 1 × 9 = 9
Post-probability = 9 / (1 + 9) = 9/10 = 0.90 = 90%
If NEGATIVE:
Post-odds = 1 × (1/9) = 1/9 ≈ 0.111111111
Post-probability = (1/9) / (1 + 1/9) = (1/9) / (10/9) = 1/10 = 0.10 = 10%
Interpretation (50% prevalence):
Positive test → 90% chance disease (good to rule in).
Negative test → 10% chance disease (a negative result still leaves a meaningful residual risk).
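The three worked cases can be verified in a few lines (a sketch of the same odds-form update):

```python
# Sketch: verify the worked examples (LR+ = 9, LR- = 1/9) at each pretest probability.
lr_pos, lr_neg = 9.0, 1 / 9

def update(pretest, lr):
    odds = pretest / (1 - pretest) * lr  # post-test odds = pre-test odds x LR
    return odds / (1 + odds)             # back to probability

for pretest in (0.01, 0.10, 0.50):
    print(f"pretest {pretest:.0%}: "
          f"positive -> {update(pretest, lr_pos):.2%}, "
          f"negative -> {update(pretest, lr_neg):.2%}")
```

The output matches the hand arithmetic above: 8.33%/0.11%, 50.00%/1.22%, and 90.00%/10.00%.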
LR+ = 9 means a positive test makes disease ~9× more likely in odds space; that’s a strong increase but not absolute.
LR− = 1/9 (~0.11) means a negative test reduces the odds to about one-ninth; that’s powerful for ruling out in low/moderate pretest settings.
The same test result has very different meaning at different pretest probabilities — that’s why clinical context (history, risk factors, prevalence) matters.
Use LRs and the odds conversion when you want a precise, quantitative update (clinically useful for high-stakes decisions).
In routine practice clinicians often approximate using heuristics (low/medium/high pretest probability), but for borderline cases, compute the LR update or use a Fagan nomogram.
I can:
Show a small table or chart comparing pre/post probabilities for a range of pretest probabilities (I can compute and display it here), or
Walk you through using a Fagan nomogram (I’ll explain how to read one), or
Provide a one-page printable worksheet where you put in Sens, Spec, pretest probability and it returns posttest probability for + and − results.
Which would you like next?
+++++++++++++++++++++++++++++++++
Excellent — the Fagan Nomogram is a classic and intuitive way to apply Bayes’ theorem without doing any math.
Let’s go through it clearly step by step — including what it is, how to use it, and what it tells you.
It’s a three-line chart that lets you estimate post-test probability using:
Pretest probability (your clinical judgment),
Likelihood ratio (LR+, LR−), and
Post-test probability (your interpretation).
Conceptually, it consists of three parallel vertical lines: pre-test probability on the left, the likelihood ratio in the middle (on a log scale), and post-test probability on the right.
You draw a straight line connecting your pre-test probability (on the left) through the LR (middle) to find your post-test probability (right).
Let’s say we’re using the same test:
Sensitivity = 90%
Specificity = 90%
→ LR+ = 9, LR− = 0.11
Locate pretest probability (say, 10%) on the left scale.
Locate LR+ (9) on the middle scale.
Draw a straight line connecting 10% → 9 → across to the right scale.
You’ll land at about 50% post-test probability.
That means: in a patient whose pretest probability was 10%, a positive test increases the likelihood of disease to 50%.
Locate 10% on the left scale.
Locate LR− (0.11) on the middle scale.
Draw a line connecting them.
You’ll land at around 1% post-test probability — confirming that a negative test largely rules out the disease.
Although we can't draw the actual nomogram here, picture three parallel vertical scales: pretest probability on the left (1–100%), the likelihood ratio in the middle (log scale, roughly 0.01–100), and posttest probability on the right (1–100%).
Draw a straight line from your pretest probability (left) through the LR (middle) to read off the posttest probability (right).
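The line-drawing has an exact numeric equivalent: because all three nomogram scales are logarithmic, a straight line corresponds to adding log(pretest odds) and log(LR). A sketch of that equivalence:

```python
import math

# Sketch: what the Fagan nomogram computes graphically.
# log(post-test odds) = log(pre-test odds) + log(LR)

def fagan(pretest_prob, lr):
    log_odds = math.log(pretest_prob / (1 - pretest_prob)) + math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# The two example lines above:
print(round(fagan(0.10, 9), 2))     # 0.5   (10% pretest, LR+ = 9 -> ~50%)
print(round(fagan(0.10, 0.11), 3))  # 0.012 (10% pretest, LR- = 0.11 -> ~1%)
```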
| LR Value | Interpretation | Effect on Probability |
|---|---|---|
| >10 | Strong evidence to rule in | Big increase |
| 5–10 | Moderate evidence to rule in | Moderate increase |
| 2–5 | Weak evidence | Small increase |
| 1 | No diagnostic value | No change |
| 0.2–0.5 | Weak evidence to rule out | Small decrease |
| 0.1–0.2 | Moderate evidence to rule out | Moderate decrease |
| <0.1 | Strong evidence to rule out | Big decrease |
Our test (LR+ = 9, LR− = 0.11) sits right at the border of strong rule-in / rule-out power.
| Situation | Pretest Probability | Test Result | Post-test Probability | Interpretation |
|---|---|---|---|---|
| Screening in general population | 1% | Positive | 8% | False positives dominate |
| Symptomatic clinic | 10% | Positive | 50% | Indeterminate — confirm |
| High suspicion | 50% | Positive | 90% | Confirms diagnosis |
| High suspicion | 50% | Negative | 10% | Rules out in most, but not all |
If you want to use this method practically:
Online Fagan calculators:
You input pretest probability and LR+, LR− and it gives posttest probabilities instantly.
These mirror the graphical method exactly.
Would you like me to generate a graphical Fagan nomogram image (with labeled example lines for the 10% and 50% cases)?
It’ll visually show how the lines cross from pretest → LR → posttest.
A test result must be interpreted in the context of the clinical picture, disease prevalence, and the test's characteristics.
++++++++++++++
Sensitivity and specificity are crucial for understanding what a test result actually means, especially when screening for or diagnosing disease.
Sensitivity (True Positive Rate): the proportion of people with the disease who test positive.
Specificity (True Negative Rate): the proportion of people without the disease who test negative.
High Sensitivity Tests: a negative result is good for ruling out disease.
High Specificity Tests: a positive result is good for ruling in disease.
No test is perfect: improving sensitivity often decreases specificity, and vice versa.
Example: PSA for Prostate Cancer
What you really want to know: "Given my positive/negative result, what's the probability I have/don't have the disease?"
This depends on prevalence (how common the disease is):
Positive Predictive Value (PPV): the probability that disease is present given a positive test; rises as prevalence rises.
Negative Predictive Value (NPV): the probability that disease is absent given a negative test; falls as prevalence rises.
COVID-19 Testing:
Cancer Screening:
Troponin for Heart Attack:
When you get a test result, consider:
Is this a screening or diagnostic test?
What's my pre-test probability?
Does the result change management?
++++++++++++++++++
Laboratory results are reported as positive or negative when compared with reference ranges [1].
To interpret a test properly, you must know how reliably the test is truly positive or truly negative; this is determined by the test's sensitivity and its specificity.
These definitions depend on determination of the presence or absence of the disease by criteria other than the laboratory test (e.g., through biopsy or final pathology report).
Interpretation of a Positive Result
Positive results must be interpreted in terms of the sensitivity of the test.
Sensitivity is the ability of a test to be positive in the presence of a disease.
It is expressed in percent.
It is calculated by dividing the number of positive results in patients with the disease (TP) by the total number of patients with the disease (TP + FN).
Note:
_______________________________________________________________________________________________________________________________
Negative Results
Specificity
Negative results can be divided into true-negative (TN) results and false-negative (FN) results.
The determination is based on a gold standard such as the final pathology report
A FN result is a negative test result in a person who has the disease.
Negative results must be interpreted in terms of the specificity of a test.
Specificity is defined as the ability of a test to be negative in the absence of disease; it is the percentage of persons known not to have the disease who actually test negative.
It is calculated by dividing the number of negative results in persons who do not have the disease (TN) by the total number of persons who do not have the disease (TN + FP).
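As a sketch of these two calculations (the cohort counts below are invented for illustration and are not from the cited text):

```python
# Sketch: sensitivity and specificity as percentages, per the definitions above.
# Disease status is assumed settled by a gold standard (e.g., final pathology).

tp, fn = 45, 5     # 50 patients with the disease: 45 test positive, 5 test negative
tn, fp = 180, 20   # 200 persons without the disease: 180 test negative, 20 test positive

sensitivity = tp / (tp + fn) * 100  # percent of diseased patients who test positive
specificity = tn / (tn + fp) * 100  # percent of non-diseased persons who test negative
print(f"Sensitivity = {sensitivity:.0f}%")  # Sensitivity = 90%
print(f"Specificity = {specificity:.0f}%")  # Specificity = 90%
```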
References
1. Interpretation of Diagnostic Tests: Laboratory Statistics. In: Gupta MEP, ed. Board Review Series: Pathology. 6th ed. Lippincott Williams & Wilkins, a Wolters Kluwer business; 2021. Accessed September 24, 2025. https://brs-lwwhealthlibrary-com.usu01.idm.oclc.org/content.aspx?bookid=2896&sectionid=243603933